93 research outputs found

    Learning to push and learning to move: The adaptive control of contact forces

    To be successful at manipulating objects, one needs to simultaneously apply well-controlled movements and contact forces. We present a computational theory of how the brain may generate a vast spectrum of interactive behaviors by combining two independent processes. One process is competent to control movements in free space; the other is competent to control contact forces against rigid constraints. Free space and rigid constraints are singularities at the boundaries of a continuum of mechanical impedance. Within this continuum, forces and motions occur in "compatible pairs" connected by the equations of Newtonian dynamics. The force applied to an object determines its motion; conversely, inverse dynamics determines a unique force trajectory from a movement trajectory. In this perspective, we describe motor learning as a process leading to the discovery of compatible force/motion pairs. The learned compatible pairs constitute a local representation of the environment's mechanics. Experiments on force-field adaptation have already provided evidence that the brain is able to predict and compensate for the forces encountered when attempting to generate a motion. Here, we tested the theory in the dual case, i.e., when one attempts to apply a desired contact force against a simulated rigid surface. If the surface becomes unexpectedly compliant, the contact point moves as a function of the applied force, and this causes the applied force to deviate from its desired value. We found that, through repeated attempts at generating the desired contact force, subjects discovered the unique compatible hand motion. When, after learning, the rigid contact was unexpectedly restored, subjects displayed aftereffects of learning, consistent with the concurrent operation of a motion control system and a force control system. Together, theory and experiment support a new and broader view of modularity in the coordinated control of forces and motions.
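
    A minimal numerical sketch of the "compatible pair" idea, assuming a point mass pressing into a spring-damper surface (the mass, damping, and stiffness values below are illustrative, not the study's parameters): given a desired indentation trajectory, inverse dynamics yields the unique force trajectory compatible with it.

```python
# Inverse dynamics sketch: recover the contact-force trajectory that is
# compatible with a prescribed motion against a compliant surface.
import numpy as np

m, b, k = 1.0, 5.0, 200.0                 # mass [kg], damping [Ns/m], stiffness [N/m] (assumed)
t = np.linspace(0.0, 1.0, 1000)
x = 0.005 * (1.0 - np.cos(np.pi * t))     # smooth 1 cm indentation of the surface

dt = t[1] - t[0]
v = np.gradient(x, dt)                    # velocity
a = np.gradient(v, dt)                    # acceleration

F = m * a + b * v + k * x                 # Newtonian dynamics: the unique compatible force
print(f"peak contact force: {F.max():.2f} N")
```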

    The separate neural control of hand movements and contact forces

    To manipulate an object, we must simultaneously control the contact forces exerted on the object and the movements of our hand. Two alternative views of manipulation have been proposed: one in which motions and contact forces are represented and controlled by separate neural processes, and one in which motions and forces are controlled jointly by a single process. To evaluate these alternatives, we designed three tasks in which subjects maintained a specified contact force while their hand was moved by a robotic manipulandum. The prescribed contact force and hand motions were selected in each task to induce the subject to attain one of three goals: (1) exerting a regulated contact force, (2) tracking the motion of the manipulandum, and (3) attaining both force and motion goals concurrently. By comparing subjects' performances in these three tasks, we found that behavior was captured by the summed actions of two independent control systems: one applying the desired force, and the other guiding the hand along the predicted path of the manipulandum. Furthermore, applying transcranial magnetic stimulation pulses to the posterior parietal cortex selectively disrupted the control of motion but did not affect the regulation of static contact force. Together, these findings are consistent with the view that manipulation of objects is performed by independent brain control of hand motions and interaction forces.
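
    The superposition account can be illustrated with a toy model (a sketch under assumed gains, not the study's fitted model): the hand command is the sum of an independent force regulator and an independent motion controller tracking the predicted manipulandum path.

```python
# Toy superposition model: total command = force controller + motion controller.
def force_controller(f_desired, f_measured, k_f=0.8):
    # proportional regulation of contact force (gain k_f is an assumption)
    return f_desired + k_f * (f_desired - f_measured)

def motion_controller(x_predicted, x_hand, v_hand, k_p=100.0, k_d=10.0):
    # PD guidance toward the predicted manipulandum path (gains are assumptions)
    return k_p * (x_predicted - x_hand) - k_d * v_hand

# the two processes act independently and their outputs simply add
command = force_controller(2.0, 1.8) + motion_controller(0.05, 0.04, 0.01)
print(f"summed command: {command:.2f}")
```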

    Autoencoder-based myoelectric controller for prosthetic hands

    In the past, linear dimensionality-reduction techniques, such as Principal Component Analysis (PCA), have been used to simplify the myoelectric control of high-dimensional prosthetic hands. Nonetheless, their nonlinear counterparts, such as Autoencoders, have been shown to be more effective at compressing and reconstructing complex hand kinematics data. As a result, they have the potential to be a more accurate tool for prosthetic hand control. Here, we present a novel Autoencoder-based controller in which the user controls a high-dimensional (17D) virtual hand via a low-dimensional (2D) space. We assess the efficacy of the controller in a validation experiment with four unimpaired participants. All participants significantly decreased the time it took them to match a target gesture with the virtual hand, to an average of 6.9 s, and three out of four participants significantly improved path efficiency. Our results suggest that the Autoencoder-based controller has the potential to manipulate high-dimensional hand systems via a myoelectric interface with higher accuracy than PCA; however, more exploration is needed into the most effective ways of learning such a controller.
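
    As a concrete illustration of the approach (a PyTorch sketch using the dimensions quoted in the abstract; the layer sizes, activation, and training loop are assumptions, not the authors' implementation), an autoencoder can compress 17-D hand kinematics into the 2-D latent space that the user then drives myoelectrically.

```python
# Sketch of a 17-D -> 2-D -> 17-D autoencoder for hand kinematics.
import torch
import torch.nn as nn

class HandAutoencoder(nn.Module):
    def __init__(self, n_dof=17, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_dof, 32), nn.Tanh(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(),
                                     nn.Linear(32, n_dof))

    def forward(self, x):
        z = self.encoder(x)          # 2-D latent used as the control space
        return self.decoder(z), z    # reconstructed 17-D hand posture

model = HandAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
kinematics = torch.randn(256, 17)    # placeholder for recorded hand kinematics

for _ in range(100):                 # training-loop sketch
    recon, _ = model(kinematics)
    loss = nn.functional.mse_loss(recon, kinematics)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```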

    Learning Redundant Motor Tasks With and Without Overlapping Dimensions: Facilitation and Interference Effects

    Prior learning of a motor skill creates motor memories that can facilitate or interfere with learning of new, but related, motor skills. One hypothesis of motor learning posits that, for a sensorimotor task with redundant degrees of freedom, the nervous system learns the geometric structure of the task and improves performance by selectively operating within that task space. We tested this hypothesis by examining whether transfer of learning between two tasks depends on shared dimensionality between their respective task spaces. Human participants wore a data glove and learned to manipulate a computer cursor by moving their fingers. Separate groups of participants learned two tasks: a prior task that was unique to each group and a criterion task that was common to all groups. We manipulated the mapping between finger motions and cursor positions in the prior task to define task spaces that either shared or did not share the task-space dimensions (x-y axes) of the criterion task. We found that if the prior task shared task dimensions with the criterion task, there was an initial facilitation in criterion-task performance. However, if the prior task did not share task dimensions with the criterion task, there was prolonged interference in learning the criterion task because participants found inefficient task solutions. These results show that the nervous system learns the task space through practice, and that the degree of shared task-space dimensionality influences the extent to which prior experience transfers to subsequent learning of related motor skills.
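
    One way to picture the shared versus non-shared manipulations (a hypothetical construction; the study's actual glove-to-cursor mappings are not reproduced here) is as linear maps whose rows either reuse, or are orthogonal to, the two cursor dimensions of the criterion task.

```python
# Hypothetical glove-to-cursor maps with shared vs. non-shared task dimensions.
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 19                                       # assumed number of data-glove channels

A_criterion = rng.standard_normal((2, n_sensors))    # criterion task: 2-D cursor map
A_shared = A_criterion.copy()                        # prior task reusing the same dimensions

# prior task with rows orthogonal to the criterion task space (no shared dimensions)
P = np.linalg.pinv(A_criterion) @ A_criterion        # projector onto the criterion row space
B = rng.standard_normal((2, n_sensors))
A_nonshared = B - B @ P

glove_frame = rng.standard_normal(n_sensors)         # one frame of finger motion
cursor_xy = A_criterion @ glove_frame                # resulting 2-D cursor position
```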

    Sensory Motor Remapping of Space in Human-Machine Interfaces

    Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigation in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the prospect of endowing human–machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.
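
    A common remapping strategy in this line of work, sketched here under the assumption of a PCA-style calibration (the review itself does not prescribe a single algorithm), extracts a low-dimensional map from recorded body motions to device commands.

```python
# PCA-style body-to-device remapping sketch (signal count and use are assumptions).
import numpy as np

rng = np.random.default_rng(1)
calibration = rng.standard_normal((2000, 8))      # placeholder for 8 recorded body-motion signals
mean = calibration.mean(axis=0)

# principal components of free body motion define a 2-D control plane
_, _, Vt = np.linalg.svd(calibration - mean, full_matrices=False)
W = Vt[:2]                                        # 2 x 8 remapping matrix

def body_to_device(signals):
    """Map instantaneous body signals to a 2-D device command (e.g. cursor or wheelchair velocity)."""
    return W @ (signals - mean)

command = body_to_device(rng.standard_normal(8))
print(command)
```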

    Tactile Proprioceptive Input in Robotic Rehabilitation after Stroke

    Stroke can lead to loss or impairment of somatosensory function (i.e., proprioception), which reduces functional control of limb movements. Here we examine the possibility of providing artificial feedback to make up for lost sensory information following stroke. However, it is not clear whether this kind of sensory substitution is even possible, given stroke-related loss of the central processing pathways that subserve somatosensation. In this paper we address this issue in a small cohort of stroke survivors using a tracking task that emulates many activities of daily living. Artificial proprioceptive information was provided to the subjects in the form of vibrotactile cues, with the goal of assisting participants in guiding their arm toward a moving target on the screen. Our experiment indicates reliable tracking accuracy under vibrotactile proprioceptive feedback, even in subjects with impaired natural proprioception. This result is promising and can open new directions in rehabilitation robotics with augmented somatosensory feedback.
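
    A minimal sketch of how such vibrotactile cueing could be parameterized (the mapping below is an assumption for illustration, not the paper's exact encoding): the 2-D tracking error is converted into a discrete vibration intensity level.

```python
# Hypothetical error-to-vibration encoding for augmented proprioceptive feedback.
import numpy as np

def error_to_vibration(target_xy, hand_xy, max_error=0.10, levels=8):
    """Map 2-D tracking error [m] to a discrete vibration level in 0..levels-1."""
    err = np.linalg.norm(np.asarray(target_xy) - np.asarray(hand_xy))
    scaled = min(err / max_error, 1.0)        # saturate beyond max_error
    return int(round(scaled * (levels - 1)))

print(error_to_vibration(target_xy=(0.10, 0.05), hand_xy=(0.07, 0.03)))
```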

    Learning new movements after paralysis: Results from a home-based study

    Body-machine interfaces (BMIs) decode upper-body motion for operating devices such as computers and wheelchairs. We developed a low-cost portable BMI for survivors of cervical spinal cord injury and investigated it as a means to support personalized assistance and therapy within the home environment. Depending on the specific impairment of each participant, we modified the interface gains to restore a higher level of upper-body mobility. Use of the BMI over one month led to increased range of motion and force at the shoulders in chronic survivors. Concurrently, subjects learned to reorganize their body motions as they practiced controlling a computer cursor to perform different tasks and games. The BMI allowed subjects to generate any movement of the cursor with different motions of their body. Through practice, subjects showed a tendency to increase the similarity between the body motions used to control the cursor in distinct tasks. Nevertheless, some significant differences persisted by the end of learning. This suggests that the central nervous system can learn to operate the BMI while concurrently adapting the available mobility to the specific spatio-temporal requirements of each task.
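
    One simple way to quantify the reported convergence of body motions across tasks (a hypothetical metric for illustration; the study's actual analysis is not reproduced here) is the cosine similarity between the mean body-motion vectors a subject uses in two tasks.

```python
# Hypothetical similarity metric between body-motion patterns across two tasks.
import numpy as np

def pattern_similarity(motions_task_a, motions_task_b):
    """Cosine similarity between the mean body-motion vectors of two tasks."""
    a = motions_task_a.mean(axis=0)
    b = motions_task_b.mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
signals_task_1 = rng.standard_normal((500, 6))   # placeholder recordings, 6 assumed body signals
signals_task_2 = rng.standard_normal((500, 6))
print(f"similarity: {pattern_similarity(signals_task_1, signals_task_2):.2f}")
```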
    • …